Aspect-level sentiment analysis aims to predict the sentiment polarity of a specific target in a given text. To address the problems that existing methods ignore the syntactic relationship between aspect words and context, and that average pooling weakens attention differences, an aspect-level sentiment analysis model based on an Alternating-Attention (AA) mechanism and Graph Convolutional Network (GCN), named AA-GCN, was proposed. Firstly, a Bidirectional Long Short-Term Memory (Bi-LSTM) network was used to semantically model the context and aspect words. Secondly, a GCN based on the syntactic dependency tree was used to learn position information and dependencies, and the AA mechanism was used for multi-level interactive learning to adaptively adjust the attention paid to the target words. Finally, the corrected aspect features and context features were concatenated to obtain the final classification basis. Compared with the Target-Dependent Graph Attention Network (TD-GAT), the proposed model increases accuracy on four public datasets by 1.13%-2.67% and F1 score on five public datasets by 0.98%-4.89%, demonstrating the effectiveness of exploiting syntactic relationships and increasing attention to keywords.
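The core of the model above is graph convolution over the dependency tree. A minimal numpy sketch of one such layer is shown below; the sentence length, feature sizes, random weights, and parse edges are all illustrative stand-ins, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(H, A, W):
    """One graph-convolution layer over a syntactic dependency graph.

    H: (n, d) node features (e.g. Bi-LSTM hidden states per token)
    A: (n, n) adjacency matrix of the dependency tree
    W: (d, d') layer weights
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees for normalization
    return np.maximum(0.0, (A_hat / deg) @ H @ W)  # aggregate, transform, ReLU

# Toy sentence of 5 tokens; the edges mimic a dependency parse (hypothetical).
n, d = 5, 8
H = rng.standard_normal((n, d))
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (2, 4)]:
    A[i, j] = A[j, i] = 1.0

out = gcn_layer(H, A, rng.standard_normal((d, d)))
print(out.shape)  # (5, 8)
```

Stacking such layers lets an aspect word aggregate features from its syntactic neighbours rather than only from adjacent tokens.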
The U-shaped Network (U-Net) based on the Fully Convolutional Network (FCN) is widely used as the backbone of medical image segmentation models. However, Convolutional Neural Networks (CNNs) are not good at capturing long-range dependencies, which limits further performance improvement of segmentation models. To solve this problem, researchers have applied Transformer to medical image segmentation models to make up for the deficiency of CNNs, and U-shaped segmentation networks combined with Transformer have become a hot research topic. After a detailed introduction of U-Net and Transformer, the related medical image segmentation models were categorized by the position of the Transformer module: only in the encoder or decoder, in both the encoder and decoder, in the skip connections, and others. The basic content, design concepts and possible improvements of these models were discussed, and the advantages and disadvantages of placing Transformer at different positions were analyzed. The analysis shows that the biggest factor in deciding the position of Transformer is the characteristics of the target segmentation task, and that segmentation models combining Transformer with U-Net can make better use of the complementary advantages of CNN and Transformer to improve segmentation performance, showing great development prospects and research value.
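The long-range-dependency advantage that motivates inserting Transformer modules comes from self-attention: every position attends to every other position in a single step. A bare numpy sketch of (unprojected, single-head) scaled dot-product self-attention, with illustrative dimensions:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a set of feature vectors.
    Each row of the weight matrix is a softmax over all positions, so any
    two positions interact directly -- unlike a convolution, whose
    receptive field grows only with depth."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ X, weights

rng = np.random.default_rng(1)
tokens = rng.standard_normal((16, 32))  # e.g. 16 flattened feature-map patches
out, w = self_attention(tokens)
print(out.shape)  # (16, 32)
```

In the surveyed models this mechanism (with learned query/key/value projections and multiple heads) is what the Transformer block contributes wherever it is placed in the U-shaped network.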
To address the low accuracy of ship target detection at sea, a lightweight ship target detection algorithm, YOLOShip, was proposed on the basis of an improved YOLOv5. Firstly, dilated convolution and channel attention were introduced into the Spatial Pyramid Pooling-Fast (SPPF) module to integrate spatial feature details of different scales, strengthen semantic information, and improve the model's ability to distinguish foreground from background. Secondly, coordinate attention and lightweight mixed depthwise convolution were introduced into the Feature Pyramid Network (FPN) and Path Aggregation Network (PAN) structures to strengthen important features, obtain features with richer detail, and improve detection ability and positioning precision. Thirdly, considering the uneven distribution and relatively small scale variation of targets in the dataset, the model was simplified and its performance further improved by modifying the anchors and reducing the number of detection heads. Finally, the more flexible Polynomial Loss (PolyLoss) was introduced to optimize the Binary Cross Entropy Loss (BCE Loss), improving convergence speed and precision. Experimental results show that, on the SeaShips dataset, YOLOShip improves Precision, Recall, mAP@0.5 and mAP@0.5:0.95 over YOLOv5s by 4.2, 5.7, 4.6 and 8.5 percentage points respectively. Thus the proposed algorithm obtains better detection precision while meeting the detection speed requirement, effectively achieving high-speed and high-precision ship detection.
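Of the modifications above, PolyLoss is the most self-contained: its Poly-1 form adds a single tunable polynomial term to cross-entropy. A minimal sketch for the binary case (the epsilon value is illustrative; the paper's chosen coefficient is not stated in the abstract):

```python
import math

def poly1_bce(p, y, eps=1.0):
    """Poly-1 form of PolyLoss on top of binary cross-entropy:
    L = BCE + eps * (1 - Pt), where Pt is the probability the model
    assigns to the true class and eps is the polynomial coefficient."""
    pt = p if y == 1 else 1.0 - p
    bce = -math.log(pt)
    return bce + eps * (1.0 - pt)

loss = poly1_bce(0.8, 1, eps=1.0)
print(round(loss, 4))  # -log(0.8) + 0.2 -> 0.4231
```

With eps = 0 the loss reduces to plain BCE, which is what makes PolyLoss a flexible drop-in adjustment rather than a new loss family.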
To address the high computational complexity and poor scalability of mining partial periodic patterns from dynamic time series data, a partial periodic pattern mining algorithm for dynamic time series data combined with multi-scale theory, named MSI-PPPGrowth (Multi-Scale Incremental Partial Periodic Frequent Pattern), was proposed. MSI-PPPGrowth makes full use of the objective multi-scale characteristics of time series data, introducing multi-scale theory into the mining of partial periodic patterns. Firstly, both the original data after scale division and the incremental time series data were mined independently as finer-grained benchmark-scale datasets. Then, the correlation between different scales was used to realize scale transformation, so that the global frequent patterns of the dynamically updated dataset were obtained indirectly, avoiding repeated scanning of the original dataset and constant adjustment of the tree structure. In addition, a new estimation model for missing frequent-item support counts, PJK-EstimateCount, was designed based on the Kriging method, taking the periodicity of the time series into account, to effectively estimate the missing support counts during scale transformation. Experimental results show that MSI-PPPGrowth has good scalability and real-time performance, with especially significant performance advantages on dense datasets.
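The scale-transformation idea (mine each fine-grained segment once, then combine per-segment results instead of re-scanning everything) can be sketched very loosely as a merge of per-scale support counts. This is only the combination step under the simplifying assumption that counts are exact; the actual algorithm also estimates missing counts with its PJK-EstimateCount model:

```python
from collections import Counter

def merge_scale_counts(scale_counts, min_support):
    """Combine support counts mined independently on each benchmark-scale
    segment into global counts, then keep the globally frequent patterns.
    (A simplified sketch of scale transformation, assuming exact counts.)"""
    total = Counter()
    for counts in scale_counts:
        total.update(counts)   # adds counts pattern-by-pattern
    return {p: c for p, c in total.items() if c >= min_support}

# Two segments: e.g. the scale-divided original data and an incremental batch.
seg1 = Counter({("a",): 4, ("a", "b"): 3, ("c",): 1})
seg2 = Counter({("a",): 2, ("c",): 2, ("a", "b"): 1})
print(merge_scale_counts([seg1, seg2], min_support=4))
```

The point of the design is that when new data arrives, only the new segment is mined; previously mined segments contribute their stored counts unchanged.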
Traditional text feature representation methods cannot fully solve the polysemy problem of words. To solve this problem, a new text classification model combining word annotations was proposed. Firstly, using an existing Chinese dictionary, the dictionary annotations of words in the text, selected according to word context, were obtained and encoded with Bidirectional Encoder Representations from Transformers (BERT) to generate annotated sentence vectors. Then, the annotated sentence vectors were fused with the word embedding vectors as the input layer to enrich the feature information of the input text. Finally, a Bidirectional Gated Recurrent Unit (BiGRU) was used to learn the feature information of the input text, and an attention mechanism was introduced to highlight the key feature vectors. Experimental results of text classification on the public THUCNews dataset and the Sina Weibo sentiment classification dataset show that text classification models combining BERT word annotations significantly outperform those without word annotations; the proposed BERT word annotation_BiGRU_Attention model achieves the highest precision and recall among all compared models, with F1-scores, reflecting overall performance, of up to 98.16% and 96.52% respectively.
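The input-layer fusion and the attention pooling described above can be sketched in a few lines of numpy. Everything here is a stand-in: random vectors replace the BERT annotation encoding and the BiGRU states, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
seq_len, d_word, d_anno = 6, 16, 16

word_emb = rng.standard_normal((seq_len, d_word))  # token embeddings
anno_vec = rng.standard_normal(d_anno)             # stand-in for the BERT-encoded annotation

# Input layer: fuse the annotated sentence vector with every word embedding.
inputs = np.concatenate(
    [word_emb, np.tile(anno_vec, (seq_len, 1))], axis=-1)  # (6, 32)

# Attention pooling over (stand-in) BiGRU outputs: highlight key vectors.
H = np.tanh(inputs @ rng.standard_normal((32, 24)))  # stand-in for BiGRU states
u = rng.standard_normal(24)                          # learned attention query
scores = H @ u
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                                 # attention weights sum to 1
sent_vec = alpha @ H                                 # weighted sentence representation
print(inputs.shape, sent_vec.shape)  # (6, 32) (24,)
```

The fused input means every token's representation carries the dictionary gloss selected for its context, which is how the model disambiguates polysemous words before sequence modelling.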
Concerning the low accuracy of tagging Chinese ambiguous words, a tagging method combining rules and statistical models was proposed in this paper. Firstly, three traditional statistical models, Hidden Markov Model (HMM), Maximum Entropy (ME) and Conditional Random Field (CRF), were applied to the tagging of ambiguous words. Then, an improved mutual information algorithm was applied to learn Part-Of-Speech (POS) tagging rules: rules were obtained by calculating the correlation between the target word and nearby word units. Finally, the rules were combined with the statistical models to tag Chinese ambiguous words. The experimental results show that after adding the rule algorithm, the average accuracy of POS tagging improves by 5%.
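A common way to combine the two components is rule-first with a statistical fallback; the sketch below assumes that scheme (the abstract does not specify the exact combination strategy), and the rule, words, and tags are hypothetical examples:

```python
def combined_tag(word, context, rules, statistical_tag):
    """Rule-plus-statistics POS tagging: if a learned rule keyed on the
    ambiguous word and a nearby word unit fires, use its tag; otherwise
    fall back to the statistical model (HMM/ME/CRF)."""
    for nearby in context:
        tag = rules.get((word, nearby))
        if tag is not None:
            return tag
    return statistical_tag(word, context)

# Hypothetical rule, as might be learned via the mutual-information criterion:
rules = {("记录", "会议"): "n"}       # "record" near "meeting" -> noun reading
fallback = lambda w, ctx: "v"        # stand-in statistical tagger

print(combined_tag("记录", ["会议", "了"], rules, fallback))  # 'n' (rule fires)
print(combined_tag("记录", ["他", "了"], rules, fallback))    # 'v' (fallback)
```

High-precision rules override the statistical model only where their context condition holds, which is consistent with the reported accuracy gain on ambiguous words.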
ECMQV is an authenticated key exchange scheme based on the conventional ECDH protocol, with the advantages of higher security and low computation overhead. This paper implemented a WTLS protocol variant by integrating the ECMQV scheme into the WTLS framework; the security of the current WTLS protocol is thereby greatly enhanced while only a little extra computation overhead is incurred. The WTLS protocol variant can be deployed on lightweight wireless terminals and meets their high-security requirements in future enterprise remote access environments.
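What distinguishes (EC)MQV from plain (EC)DH is that each side folds its static key into the ephemeral exchange via an "implicit signature", so the shared secret authenticates both parties without extra messages. The toy sketch below illustrates that structure in a small prime-order subgroup of the integers mod p rather than an elliptic-curve group, with insecure demonstration parameters and a simplified associate-value function:

```python
# Toy MQV-style key agreement (NOT elliptic-curve, NOT secure parameters):
# illustrates how ECMQV binds static and ephemeral keys into one secret.
p, q, g = 23, 11, 2          # g generates the order-11 subgroup mod 23

def avf(P, w=2):
    """Simplified stand-in for MQV's associate-value function."""
    return (P % 2**w) + 2**w

def shared_secret(my_eph_priv, my_eph_pub, my_static_priv,
                  peer_eph_pub, peer_static_pub):
    d = avf(my_eph_pub)                          # binds own ephemeral key
    e = avf(peer_eph_pub)                        # binds peer's ephemeral key
    s = (my_eph_priv + d * my_static_priv) % q   # implicit signature
    return pow(peer_eph_pub * pow(peer_static_pub, e, p) % p, s, p)

a, A = 3, pow(g, 3, p)       # Alice: static key pair
x, X = 5, pow(g, 5, p)       # Alice: ephemeral key pair
b, B = 4, pow(g, 4, p)       # Bob: static key pair
y, Y = 6, pow(g, 6, p)       # Bob: ephemeral key pair

zA = shared_secret(x, X, a, Y, B)
zB = shared_secret(y, Y, b, X, A)
print(zA == zB)  # both sides derive the same secret
```

Because the static private keys enter the exponent, an attacker who knows only the public values cannot complete either side's computation, which is the authentication property the WTLS variant relies on.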